Nextflow workflow report

[lonely_shannon] (resumed run)

Workflow execution completed successfully!
Run times
06-Sep-2023 20:25:16 - 06-Sep-2023 22:23:51 (duration: 1h 58m 35s)
  1 succeeded  
  5186 cached  
  0 ignored  
  0 failed  
Nextflow command
nextflow /dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/code/02_SPEAQeasy/SPEAQeasy/main.nf \
    --sample paired \
    --reference hg38 \
    --strand reverse \
    --strand_mode declare \
    --annotation /dcs04/lieber/lcolladotor/annotationFiles_LIBD001/SPEAQeasy/Annotation \
    -with-report /dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/processed-data/02_SPEAQeasy/execution_reports/02-run_pipeline.html \
    -w /dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/processed-data/02_SPEAQeasy/work \
    --input /dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/processed-data/02_SPEAQeasy/ \
    --output /dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/processed-data/02_SPEAQeasy/pipeline_output \
    --experiment living_brain_reanalysis \
    -profile jhpce \
    -resume
CPU-Hours
13'440.4 (100% cached)
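Since this was a resumed run, essentially all tasks were restored from the cache rather than re-executed, which is why the CPU-hours are reported as 100% cached. A minimal sketch of the resume mechanism, assuming a generic relaunch (here `<work-dir>` is a placeholder, not a real path; the actual command and work directory used for this run are listed above):

```shell
# Relaunching a Nextflow pipeline with -resume: tasks whose inputs,
# parameters, and scripts are unchanged are restored from the cache
# in the work directory (<work-dir>) instead of being re-executed.
nextflow run main.nf -profile jhpce -w <work-dir> -resume
```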
Launch directory
/dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/code/02_SPEAQeasy
Work directory
/dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/processed-data/02_SPEAQeasy/work
Project directory
/dcs05/lieber/lcolladotor/living_brain_LIBD001/living_brain_reanalysis/code/02_SPEAQeasy/SPEAQeasy
Script name
main.nf
Script ID
49b4bdd1ef304bae7bf0a9dac5b8e854
Workflow session
ac1a144e-366c-4ac8-9124-94432f7d79d3
Workflow profile
jhpce
Nextflow version
version 20.01.0, build 5264 (12-02-2020 10:14 UTC)
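The run name ([lonely_shannon]) and session ID above can also be used to inspect this execution from the command line. A hedged sketch using Nextflow's `log` subcommand, run from the launch directory (the exact output columns may vary by Nextflow version):

```shell
# List past executions recorded in this launch directory,
# including run name, session ID, status, and duration.
nextflow log

# Show the work directories of the tasks belonging to a
# specific run, referenced by its run name.
nextflow log lonely_shannon
```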

Resource Usage

These plots give an overview of the distribution of resource usage for each process.

(plots omitted in this text export: CPU, Memory, Job Duration, and I/O usage distributions per process)

Tasks

This table lists information about each task in the workflow. In the interactive HTML report, it can be filtered via the search box, sorted by clicking a column header, and scrolled horizontally to reveal more columns.

(tasks table omitted because the dataset is too big)